Online panels offer the benefits of lower costs, timeliness, and access to large numbers of respondents compared to surveys that use more traditional modes of data collection (face-to-face, telephone, and mail). Probability-based online panels, in which potential respondents are selected randomly from a frame covering the target population, allow valid conclusions to be drawn about the population of interest. With nonresponse increasing in recent years in surveys using the traditional modes of data collection (de Leeuw & de Heer, 2002), probability-based online panels are an appealing alternative. However, whether the quality of data collected by probability-based online panels is comparable to the quality of data attained through traditional data collection methods is questionable. Probability-based online panels are expensive to recruit and maintain, raising the question: are the costs of constructing and maintaining them justified by the quality of the data that can be obtained?
There are several sources of error in online panel surveys: excluding non-Internet users may result in coverage error; persons selected for the study who cannot be reached or do not want to participate may cause nonresponse error; and participants may stop participating in later waves (attrition). Furthermore, by taking surveys regularly, respondents can learn to answer dishonestly or to answer filter questions negatively in order to reduce the burden of participation (panel conditioning). All of these errors can accumulate in the survey estimates, and the conclusions based on those estimates may therefore be misleading.
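To make the idea of accumulation concrete, the error sources can be written into a stylized mean-squared-error decomposition in the spirit of the Total Survey Error framework. The notation below (B_cov, B_nr, B_attr, B_meas for coverage, nonresponse, attrition, and measurement bias, the last including panel conditioning) is illustrative shorthand, not the dissertation's own:

```latex
% Stylized Total Survey Error decomposition (illustrative notation):
% systematic errors add up in the bias term, which enters the mean
% squared error of the estimate \hat{\theta} alongside its variance.
\mathrm{MSE}(\hat{\theta})
  = \bigl(B_{\mathrm{cov}} + B_{\mathrm{nr}} + B_{\mathrm{attr}} + B_{\mathrm{meas}}\bigr)^{2}
  + \mathrm{Var}(\hat{\theta})
```

Because the bias components enter jointly, even individually small coverage, nonresponse, attrition, and conditioning errors can combine into a noticeable distortion of an estimate.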
The goal of this dissertation is to study the quality of data obtained with probability-based online panels. It aims to advance the understanding of the causes of these errors and to guide design decisions in recruiting and maintaining probability-based online panels. The dissertation evaluates the overall quality of estimates from an online panel and focuses on potential sources of error: nonresponse during the recruitment interview, nonresponse to the first online survey, panel attrition, panel conditioning, and the effects of the survey mode.
This dissertation consists of five studies, theoretically integrated by the overarching framework of Total Survey Error (Biemer, 2010; Groves, 1989; Groves & Lyberg, 2010). The framework is extended with theoretical knowledge on two types of error specific to panel surveys: panel nonresponse (attrition) and panel conditioning (Kalton, Kasprzyk, & McMillen, 1989). The empirical analyses are based on data from a probability-based, telephone-recruited online panel of Internet users in Germany: the GESIS Online Panel Pilot.
The error sources are studied in connection with the recruitment and operating steps typical of probability-based online panels. Each chapter studies a different aspect of data quality. The chapters are written as individual papers, each addressing specific research questions.
Chapter 1 introduces the theoretical framework of data quality and the Total Survey Error. Chapter 2 evaluates the accuracy of the final estimates obtained in the online panel. The data from the online panel are compared to data from two high-quality face-to-face reference surveys: the German General Social Survey (ALLBUS) and the German sample of the European Social Survey (ESS). Furthermore, since researchers may be interested not only in single estimates, I study the extent to which survey results are comparable when the data are used for modeling social phenomena. The results show differences among the surveys on most of the socio-demographic and attitudinal variables; however, these differences average only a few percentage points. To account for the design decision to exclude non-Internet users, post-survey adjustments were performed; post-stratification weighting, however, did not bring the estimates from the online panel closer to those from the reference surveys.
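As an illustration of this kind of adjustment, the following is a minimal sketch of post-stratification weighting, assuming hypothetical stratification cells and invented population shares (none of the cell definitions, shares, or variable names are taken from the dissertation):

```python
import pandas as pd

# Hypothetical sample: 100 respondents spread evenly over four cells
sample = pd.DataFrame({
    "cell": ["male_18-39", "male_40+", "female_18-39", "female_40+"] * 25,
})

# Hypothetical population shares for the same cells (e.g., from census data)
population_share = {
    "male_18-39": 0.20, "male_40+": 0.30,
    "female_18-39": 0.22, "female_40+": 0.28,
}

# Post-stratification weight per cell = population share / sample share
sample_share = sample["cell"].value_counts(normalize=True)
sample["weight"] = sample["cell"].map(
    lambda c: population_share[c] / sample_share[c]
)

# A weighted mean of a survey variable y would then be
# (sample["y"] * sample["weight"]).sum() / sample["weight"].sum()
print(sample.groupby("cell")["weight"].first())
```

Each respondent's weight is the ratio of the cell's population share to its sample share, so the weighted sample reproduces the population distribution over the weighting cells; as the chapter shows, this does not guarantee that substantive estimates move closer to the reference surveys.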
Chapter 3 focuses on nonresponse, studying the influence of respondent characteristics, design features (incentives and fieldwork agency), and mode-specific respondent characteristics (Internet experience, online survey experience). The results indicate that participation in the panel is selective: previous experience with the Internet and with online surveys predicts both willingness to participate and actual participation in the panel. Incentives and the fieldwork agencies that performed the recruitment also influence the decision to participate.
Chapter 4 studies why panel members choose to stay in the panel or to discontinue participation. The main question is whether respondents are motivated by intrinsic factors (the survey experience) or extrinsic factors (monetary incentives). The findings indicate that respondents who view surveys as long, difficult, or too personal are likely to attrite, and that incentives, although negatively related to attrition, do not compensate for this burdensome experience.
Chapter 5 focuses on panel conditioning due to learning the survey process. To find out whether more experienced respondents answer differently from less experienced respondents, I conducted two experiments in which the order of the questionnaires was switched. The findings indicate limited evidence of advantageous panel conditioning and no evidence of disadvantageous panel conditioning.
Chapter 6 studies mode-system effects (i.e., differences in the estimates that result from the whole process by which they were collected). The data from the online panel are compared to two reference surveys: ALLBUS 2010 and ALLBUS 2012. Both face-to-face surveys were fielded by the same agency and employed an almost identical design; together they therefore serve as a "reference mode" for comparison with the online panel. I use questions with identical wordings that were present in both ALLBUS surveys and replicated in the online panel. Differences in sample composition among the surveys are adjusted for by propensity-score weighting. The results show that the online panel and the two reference surveys differ on attitudinal measures, whereas for factual questions the reference surveys differ from the online panel but not from each other. Judging by the effect sizes, the magnitude of the differences is small. The overall conclusion is that data from the online panel are fairly comparable to data from high-quality face-to-face surveys.
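As an illustration of the adjustment step, here is a minimal sketch of propensity-score weighting under the assumption of two stacked samples with synthetic covariates (the variables age and education are placeholders, not the adjustment variables actually used in the dissertation):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500

# Stack both samples; 'online' flags membership in the online panel
df = pd.DataFrame({
    "age": rng.normal(45, 15, 2 * n),
    "education": rng.integers(1, 5, 2 * n),
    "online": np.repeat([1, 0], n),  # 1 = online panel, 0 = reference survey
})

# Model the probability of being in the online sample given covariates
model = LogisticRegression().fit(df[["age", "education"]], df["online"])
p = model.predict_proba(df[["age", "education"]])[:, 1]

# Weighting online respondents by (1 - p) / p pushes their covariate
# distribution toward that of the face-to-face reference sample
is_online = df["online"] == 1
df.loc[is_online, "weight"] = (1 - p[is_online]) / p[is_online]
print(df.loc[is_online, "weight"].describe())
```

Weighted comparisons of attitudinal and factual items would then use these weights for the online cases, so that remaining differences are less likely to reflect mere differences in sample composition.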
The results of this dissertation provide additional insight into the processes that contribute to the quality of data produced by probability-based online panels. They can guide researchers who plan to build online panels of Internet users or of the general population, and they should also prove useful for existing panels that are considering a switch to the online mode.
Web survey bibliography - Germany (361)
- Interviewer effects on onliner and offliner participation in the German Internet Panel; 2017; Herzing, J. M. E.; Blom, A. G.; Meuleman, B.
- Comparing the same Questionnaire between five Online Panels: A Study of the Effect of Recruitment Strategy...; 2017; Schnell, R.; Panreck, L.
- Push2web or less is more? Experimental evidence from a mixed-mode population survey at the community...; 2017; Neumann, R.; Haeder, M.; Brust, O.; Dittrich, E.; von Hermanni, H.
- Social Desirability and Undesirability Effects on Survey Response Latencies; 2017; Andersen, H.; Mayerl, J.
- Comparison of response patterns in different survey designs: a longitudinal panel with mixed-mode and...; 2017; Ruebsamen, N.; Akmatov, M. K.; Castell, S.; Karch, A.; Mikolajczyk, R. T.
- Mobile Research im Kontext der digitalen Transformation [Mobile Research in the Context of the Digital Transformation]; 2017; Friedrich-Freksa, M.
- Kognitives Pretesting [Cognitive Pretesting]; 2017; Neuert, C.
- Grundzüge des Datenschutzrechts und aktuelle Datenschutzprobleme in der Markt- und Sozialforschung [Fundamentals of Data Protection Law and Current Data Protection Issues in Market and Social Research]; 2017; Schweizer, A.
- Establishing an Open Probability-Based Mixed-Mode Panel of the General Population in Germany...; 2017; Bosnjak, M.; Dannwolf, T.; Enderle, T.; Schaurer, I.; Struminskaya, B.; Tanner, A.; Weyandt, K.
- Socially Desirable Responding in Web-Based Questionnaires: A Meta-Analytic Review of the Candor Hypothesis...; 2016; Gnambs, T.; Kaspar, K.
- Methodological Aspects of Central Left-Right Scale Placement in a Cross-national Perspective; 2016; Scholz, E.; Zuell, C.
- Predicting and Preventing Break-Offs in Web Surveys; 2016; Mittereder, F.
- Incorporating eye tracking into cognitive interviewing to pretest survey questions; 2016; Neuert, C.; Lenzner, T.
- Geht’s auch mit der Maus? – Eine Methodenstudie zu Online-Befragungen in der Jugendforschung... [Does It Also Work with the Mouse? A Methods Study on Online Surveys in Youth Research...]; 2016; Heim, R.; Konowalczyk, S.; Grgic, M.; Seyda, M.; Burrmann, U.; Rauschenbach, T.
- Comparing Cognitive Interviewing and Online Probing: Do They Find Similar Results?; 2016; Meitinger, K.; Behr, D.
- Device Effects - How different screen sizes affect answers in online surveys; 2016; Fisher, B.; Bernet, F.
- Effects of motivating question types with graphical support in multi channel design studies; 2016; Luetters, H.; Friedrich-Freksa, M.; Vitt, S.; Goldstein, D. G.
- Analyzing Cognitive Burden of Survey Questions with Paradata: A Web Survey Experiment; 2016; Hoehne, J. K.; Schlosser, S.; Krebs, D.
- Secondary Respondent Consent in the German Family Panel; 2016; Schmiedeberg, C.; Castiglioni, L.; Schroeder, J.
- Does Changing Monetary Incentive Schemes in Panel Studies Affect Cooperation? A Quasi-experiment on...; 2016; Schaurer, I.; Bosnjak, M.
- Using Cash Incentives to Help Recruitment in a Probability Based Web Panel: The Effects on Sign Up Rates...; 2016; Krieger, U.
- The Mobile Web Only Population: Socio-demographic Characteristics and Potential Bias; 2016; Fuchs, M.; Metzler, A.
- The Impact of Scale Direction, Alignment and Length on Responses to Rating Scale Questions in a Web...; 2016; Keusch, F.; Liu, M.; Yan, T.
- Web Surveys Versus Other Survey Modes: An Updated Meta-analysis Comparing Response Rates; 2016; Wengrzik, J.; Bosnjak, M.; Lozar Manfreda, K.
- Retrospective Measurement of Students’ Extracurricular Activities with a Self-administered Calendar...; 2016; Furthmueller, P.
- Privacy Concerns in Responses to Sensitive Questions. A Survey Experiment on the Influence of Numeric...; 2016; Bader, F.; Bauer, J.; Kroher, M.; Riordan, P.
- Ballpoint Pens as Incentives with Mail Questionnaires – Results of a Survey Experiment; 2016; Heise, M.
- Does survey mode matter for studying electoral behaviour? Evidence from the 2009 German Longitudinal...; 2016; Bytzek, E.; Bieber, I. E.
- Forecasting proportional representation elections from non-representative expectation surveys; 2016; Graefe, A.
- Setting Up an Online Panel Representative of the General Population: The German Internet Panel; 2016; Blom, A. G.; Gathmann, C.; Krieger, U.
- Online Surveys are Mixed-Device Surveys. Issues Associated with the Use of Different (Mobile) Devices...; 2016; Toepoel, V.; Lugtig, P. J.
- Stable Relationships, Stable Participation? The Effects of Partnership Dissolution and Changes in Relationship...; 2016; Mueller, B.; Castiglioni, L.
- Will They Stay or Will They Go? Personality Predictors of Dropout in an Online Study; 2016; Nestler, S.; Thielsch, M.; Vasilev, E.; Back, M.
- Respondent Conditioning in Online Panel Surveys: Results of Two Field Experiments; 2016; Struminskaya, B.
- A Privacy-Friendly Method to Reward Participants of Online-Surveys; 2015; Herfert, M.; Lange, B.; Selzer, A.; Waldmann, U.
- The impact of frequency rating scale formats on the measurement of latent variables in web surveys -...; 2015; Menold, N.; Kemper, C. J.
- Investigating response order effects in web surveys using eye tracking; 2015; Hoehne, J. K.; Lenzner, T.
- Implementation of the forced answering option within online surveys: Do higher item response rates come...; 2015; Decieux, J. P.; Mergener, A.; Neufang, K.; Sischka, P.
- Translating Answers to Open-ended Survey Questions in Cross-cultural Research: A Case Study on the Interplay...; 2015; Behr, D.
- The Effects of Questionnaire Completion Using Mobile Devices on Data Quality. Evidence from a Probability...; 2015; Bosnjak, M.; Struminskaya, B.; Weyandt, K.
- Are they willing to use the web? First results of a possible switch from PAPI to CAPI/CAWI in an establishment...; 2015; Ellguth, P.; Kohaut, S.
- Measuring Political Knowledge in Web-Based Surveys: An Experimental Validation of Visual Versus Verbal...; 2015; Munzert, S.; Selb, P.
- Changing from CAPI to CAWI in an ongoing household panel - experiences from the German Socio-Economic...; 2015; Schupp, J.; Sassenroth, D.
- Rating Scales in Web Surveys: A Test of New Drag-and-Drop Rating Procedures; 2015; Kunz, T.
- Mode System Effects in an Online Panel Study: Comparing a Probability-based Online Panel with two Face...; 2015; Struminskaya, B.; De Leeuw, E. D.; Kaczmirek, L.
- Higher response rates at the expense of validity? Consequences of the implementation of the ‘forced...; 2015; Decieux, J. P.; Mergener, A.; Neufang, K.; Sischka, P.
- A quasi-experiment on effects of prepaid versus promised incentives on participation in a probability...; 2015; Schaurer, I.; Bosnjak, M.
- Response Effects of Prenotification, Prepaid Cash, Prepaid Vouchers, and Postpaid Vouchers: An Experimental...; 2015; van Veen, F.; Goeritz, A.; Sattler, S.
- Recruiting Respondents for a Mobile Phone Panel: The Impact of Recruitment Question Wording on Cooperation...; 2015; Busse, B.; Fuchs, M.
- The Influence of the Answer Box Size on Item Nonresponse to Open-Ended Questions in a Web Survey ; 2015; Zuell, C.; Menold, N.; Koerber, S.